
Hidden humor, the software developer’s secret weapon

Credit: Pixabay/CC0 Public Domain

Writing software code can be a painstaking and stressful process—and downright boring when the job is repetitive and you’re doing it remotely, alone in front of your screen.

To liven things up, many developers and testers use humor to relieve the monotony and connect with their virtual colleagues by sharing a joke. Over time, it creates a bond with fellow developers, though the humor and creativity slipped in between the lines of code are invisible to the rest of us.

“Humor creates relationships between people who are physically distant and is a good way to stave off boredom,” said Benoit Baudry, a professor in the Department of Computer Science and Operations Research at Université de Montréal. “It’s a way to build engagement.” Until recently, Baudry was at the Royal Institute of Technology in Stockholm, where he and his colleagues studied the special humor of developers.

“Developers are people who love software,” said Baudry. “So they try to create emotional bonds using the digital technology that is their work tool.”

But they have to exercise some caution about when and where they insert jokes and comments. They don’t want any of their jests to end up on Instagram.

To find out more about how they do it, Baudry and his fellow researchers circulated an online questionnaire that was posted on developer sites. More than 125 developers from around the world responded. They reported using humor most frequently in test inputs and “commits,” or changes to the code. A sly dialogue unfolds between the test lines.

The research was published in the Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Society on June 6 and is also available on the arXiv preprint server.

Darth Vader, Luke et al.

Baudry and his co-authors looked at Faker, a library that generates random data for use in testing code. Instead of lorem ipsum—a sequence of meaningless words commonly used as a placeholder for text on a page until it can be replaced by the real thing—developers will sprinkle their lines with cultural references such as allusions to Seinfeld or quotes from poets.

“Some references are fairly specific, others are universal: who doesn’t know the characters from Star Wars or The Matrix?” said Baudry. Characters from cult films are frequently used in titles, as are quotations. An example from Faker: “The wise animal blends into its surroundings” (a quote from the Dune films). Nothing edgy or inappropriate, just light-hearted asides through which developers signal their interests and elicit a smile from their colleagues.
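The pattern is easy to reproduce. The sketch below, in the spirit of Faker's themed data providers, generates test fixtures seeded with Star Wars characters; the name list, the `fake_user` helper, and the email format are illustrative, not Faker's actual API:

```python
import random

# Faker-style test data with a cultural reference slipped in:
# fixture names drawn from Star Wars rather than lorem ipsum.
# (Illustrative sketch; the real Faker libraries ship their own providers.)
STAR_WARS_NAMES = ["Luke Skywalker", "Leia Organa", "Darth Vader",
                   "Han Solo", "Obi-Wan Kenobi"]

def fake_user(rng: random.Random) -> dict:
    """Generate one fake user record for a test fixture."""
    name = rng.choice(STAR_WARS_NAMES)
    slug = name.lower().replace(" ", ".").replace("-", "")
    return {"name": name, "email": f"{slug}@example.com"}

rng = random.Random(42)  # seeded so the test data is reproducible
user = fake_user(rng)
print(user["name"], user["email"])
```

A reviewer scanning the test suite sees valid-looking data, but a colleague who knows the films gets the joke.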

“Personally, I like to use characters from the 1998 film The Big Lebowski in error messages,” said Baudry. He is also enthusiastic about lolcommits, a utility that lets developers send a selfie when they make changes to code. “These pics foster bonds with colleagues and are a way to celebrate when the job is done,” he said.

The pioneer who paved the way

The trailblazer for quips in code was the brilliant NASA engineer and computer scientist Margaret Hamilton, who led the team that designed the system for the Apollo 11 lunar landing program in 1969. When the code was made public, people could see that it was peppered with jokes, Shakespeare quotes and references to The Wizard of Oz.

Humor in code “helps keep it fun,” one of the respondents to the survey commented. “I love it and think fondly of people writing that part of the code or comment.”

Humor “makes a codebase feel more humanized, like it was created by a real person,” another respondent said.

Naturally, there are limits to the kind of humor that can be injected into code. “It should not create a toxic or unwelcoming culture,” cautioned one respondent.

Baudry’s interest in tech humor is not new. Last year, he published a fascinating article on “Easter eggs,” features hidden in software which can be unlocked by pressing a combination of keys or correctly positioning the pointer. But unlike code humor, Easter eggs can be discovered by the public, especially in video games.

Baudry also wants the users of technology to be more aware of the behind-the-scenes human activity that produces the thousands of connections and apps that are woven into our lives. In the past, he has given talks on art and technology while projecting code onto giant screens in public places. For the love of code!

More information:
Deepika Tiwari et al, With Great Humor Comes Great Developer Engagement, Proceedings of the 46th International Conference on Software Engineering: Software Engineering in Society (2024). DOI: 10.1145/3639475.3640099. On arXiv: DOI: 10.48550/arxiv.2312.01680

Journal information:
arXiv


Citation:
Hidden humor, the software developer’s secret weapon (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-hidden-humor-software-secret-weapon.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.






Managing screen time by making phones slightly more annoying to use

Existing apps for managing screen time can abruptly lock users out of their phone. If users were in the middle of an important task, they could scramble to skip the time limit, opening themselves up to spend more time on their phone than originally intended. InteractOut’s interventions are more gradual and allow users to decide when to put down their phones while also encouraging them to think harder about their smartphone use. Credit: Jeremy Little, Michigan Engineering

The best way to help smartphone users manage their screen time may be to make phones progressively more annoying to use, according to new University of Michigan research.

The study, published in Proceedings of the CHI Conference on Human Factors in Computing Systems, shows that interfering with swiping and tapping functions is around 16% more effective at reducing screen time and the number of times an app is opened than forcibly locking users out of their phones.

The lockout strategy is used by many screen-time management apps today, and such apps also send users a notification offering more time before locking. Researchers discussed the findings Monday, May 13, at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2024) in Honolulu.

“Lockout apps are pretty disruptive, so if someone is in the middle of an important task or a game, they’ll scramble to skip through the screen timer. Then, they can forget about the time limit and spend more time on the phone than they wanted to,” said Anhong Guo, U-M assistant professor of computer science and engineering and the corresponding author of the study.






Credit: University of Michigan

The researchers’ InteractOut app is more effective at limiting screen time because it is less restrictive and harder to ignore than hard lockouts. Once the user’s designated screen limit has been reached, InteractOut can delay the phone’s response to a user’s gesture, shift where tapping motions are registered or slow the screen scrolling speed.

The strength of the delays and shifts continues to increase each time the user touches the phone, up to a pre-set maximum, and the user can decide how the app interferes with their phone use. The app’s gradual interference allows users to continue using their phone, but with a little extra difficulty.

“If we just continuously add a little bit of friction to the interaction with the phone, eventually the user becomes more aware of what they are doing because there’s a mismatch between what they expect to happen and what actually happens. That makes using smartphones a more deliberate process,” said Guo, who also is an assistant professor of information.
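The paper describes this ramp only qualitatively. A minimal sketch of the idea, assuming a linear ramp with a user-chosen cap (the function name, step size, and maximum are illustrative, not InteractOut's actual parameters):

```python
# Sketch of gradual gesture interference: each touch past the daily
# screen-time limit adds a little more input delay, up to a preset cap,
# so the phone stays usable but increasingly annoying.
def interaction_delay_ms(touches_past_limit: int,
                         step_ms: int = 10,
                         max_ms: int = 500) -> int:
    """Delay added to each gesture once the screen-time limit is exceeded."""
    if touches_past_limit <= 0:
        return 0  # under the limit: the phone behaves normally
    return min(touches_past_limit * step_ms, max_ms)

# The friction ramps up with continued use, then plateaus at the cap.
print([interaction_delay_ms(n) for n in (0, 1, 10, 100)])  # → [0, 10, 100, 500]
```

The cap matters: without it, the mismatch between expected and actual behavior would eventually make the phone unusable rather than merely deliberate to use.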

The researchers believe that forcing more mindfulness into otherwise mindless gesturing is the key to making smartphones less addictive.

“We want to evoke users’ awareness of using their smartphone so that they can use it more productively,” said study first author Tao Lu, a recent U-M bachelor of science graduate in computer science who is now a master’s student at the Georgia Institute of Technology.

While designing InteractOut, the researchers had to be careful not to make the phone so inconvenient that it became insufferable. To ensure their software strikes the right balance, the team tested their app’s performance in a field study of 42 participants that took place over five weeks.

In the first week of the study, the researchers reviewed how often each participant used their phones without screen time management tools. Then, each participant installed the InteractOut app on their Android phone and chose which other apps it could monitor and interfere with. The researchers fixed the participants’ daily screen time allowance to one hour, after which InteractOut began to modify swipe and tap functions inside of the specified apps.

All participants received a random swipe and tap intervention from InteractOut for a single two-week period, and their screen time was compared to a separate two-week period in which they used Timed Lockout, a widely available app that imposes hard lockouts.

The researchers found that InteractOut was not only more effective at reducing screen time for the targeted apps than a hard lockout, but it was also better received by the study participants. When the screen-time management apps were activated, around 62% of the participants kept InteractOut’s interventions on for the day, but only 36% of the participants did the same with Timed Lockout.

There is still room for improvement, however. The participants thought that InteractOut was too intrusive for some games that require precise, real-time movements. It was also less effective at limiting the amount of time spent on apps that require few tapping or swiping gestures, such as video streaming services. Guo plans to find ways to tailor the app’s interventions to be better suited for different kinds of phone apps.

The team has submitted an invention disclosure for the software and hopes to eventually bring it to market.

More information:
Tao Lu et al, InteractOut: Leveraging Interaction Proxies as Input Manipulation Strategies for Reducing Smartphone Overuse, Proceedings of the CHI Conference on Human Factors in Computing Systems (2024). DOI: 10.1145/3613904.3642317

Citation:
Managing screen time by making phones slightly more annoying to use (2024, May 14)
retrieved 24 June 2024
from https://techxplore.com/news/2024-05-screen-slightly-annoying.html







California lawmakers are trying to regulate AI before it’s too late

Credit: Unsplash/CC0 Public Domain

For four years, Jacob Hilton worked for one of the most influential startups in the Bay Area—OpenAI. His research helped test and improve the truthfulness of AI models such as ChatGPT. He believes artificial intelligence can benefit society, but he also recognizes the serious risks if the technology is left unchecked.

Hilton was among 13 current and former OpenAI and Google employees who this month signed an open letter that called for more whistleblower protections, citing broad confidentiality agreements as problematic.

“The basic situation is that employees, the people closest to the technology, they’re also the ones with the most to lose from being retaliated against for speaking up,” says Hilton, 33, now a researcher at the nonprofit Alignment Research Center, who lives in Berkeley, California.

California legislators are rushing to address such concerns through roughly 50 AI-related bills, many of which aim to place safeguards around the rapidly evolving technology, which lawmakers say could cause societal harm.

However, groups representing large tech companies argue that the proposed legislation could stifle innovation and creativity, causing California to lose its competitive edge and dramatically change how AI is developed in the state.

The effects of artificial intelligence on employment, society and culture are wide reaching, and that’s reflected in the number of bills circulating the Legislature. They cover a range of AI-related fears, including job replacement, data security and racial discrimination.

One bill, co-sponsored by the Teamsters, aims to mandate human oversight on driverless heavy-duty trucks. A bill backed by the Service Employees International Union attempts to ban the automation or replacement of jobs by AI systems at call centers that provide public benefit services, such as Medi-Cal. Another bill, written by Sen. Scott Wiener, D-San Francisco, would require companies developing large AI models to do safety testing.

The plethora of bills comes after politicians were criticized for not cracking down hard enough on social media companies until it was too late. During the Biden administration, federal and state Democrats have become more aggressive in going after big tech firms.

“We’ve seen with other technologies that we don’t do anything until well after there’s a big problem,” Wiener said. “Social media had contributed many good things to society … but we know there have been significant downsides to social media, and we did nothing to reduce or to mitigate those harms. And now we’re playing catch-up. I prefer not to play catch-up.”

The push comes as AI tools are quickly progressing. They read bedtime stories to children, sort drive-through orders at fast food locations and help make music videos. While some tech enthusiasts tout AI’s potential benefits, others fear job losses and safety issues.

“It caught almost everybody by surprise, including many of the experts, in how rapidly (the tech is) progressing,” said Dan Hendrycks, director of the San Francisco-based nonprofit Center for AI Safety. “If we just delay and don’t do anything for several years, then we may be waiting until it’s too late.”

Wiener’s bill, SB1047, which is backed by the Center for AI Safety, calls for companies building large AI models to conduct safety testing and have the ability to turn off models that they directly control.

The bill’s proponents say it would protect against situations such as AI being used to create biological weapons or shut down the electrical grid, for example. The bill also would require AI companies to implement ways for employees to file anonymous concerns. The state attorney general could sue to enforce safety rules.

“Very powerful technology brings both benefits and risks, and I want to make sure that the benefits of AI profoundly outweigh the risks,” Wiener said.

Opponents of the bill, including TechNet, a trade group that counts tech companies including Meta, Google and OpenAI among its members, say policymakers should move cautiously. Meta and OpenAI did not respond to a request for comment. Google declined to comment.

“Moving too quickly has its own sort of consequences, potentially stifling and tamping down some of the benefits that can come with this technology,” said Dylan Hoffman, executive director for California and the Southwest for TechNet.

The bill passed the Assembly Privacy and Consumer Protection Committee on Tuesday and will next go to the Assembly Judiciary Committee and Assembly Appropriations Committee, and if it passes, to the Assembly floor.

Proponents of Wiener’s bill say they’re responding to the public’s wishes. In a poll of 800 potential voters in California commissioned by the Center for AI Safety Action Fund, 86% of participants said it was an important priority for the state to develop AI safety regulations. According to the poll, 77% of participants supported the proposal to subject AI systems to safety testing.

“The status quo right now is that, when it comes to safety and security, we’re relying on voluntary public commitments made by these companies,” said Hilton, the former OpenAI employee. “But part of the problem is that there isn’t a good accountability mechanism.”

Another bill with sweeping implications for workplaces is AB 2930, which seeks to prevent “algorithmic discrimination,” or when automated systems put certain people at a disadvantage based on their race, gender or sexual orientation when it comes to hiring, pay and termination.

“We see example after example in the AI space where outputs are biased,” said Assemblymember Rebecca Bauer-Kahan, D-Orinda.

The anti-discrimination bill failed in last year’s legislative session, with major opposition from tech companies. Reintroduced this year, the measure initially had backing from high-profile tech companies Workday and Microsoft, although they have wavered in their support, expressing concerns over amendments that would put more responsibility on firms developing AI products to curb bias.

“Usually, you don’t have industries saying, ‘Regulate me,’ but various communities don’t trust AI, and what this effort is trying to do is build trust in these AI systems, which I think is really beneficial for industry,” Bauer-Kahan said.

Some labor and data privacy advocates worry that language in the proposed anti-discrimination legislation is too weak. Opponents say it’s too broad.

Chandler Morse, head of public policy at Workday, said the company supports AB 2930 as introduced. “We are currently evaluating our position on the new amendments,” Morse said.

Microsoft declined to comment.

The threat of AI is also a rallying cry for Hollywood unions. The Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists negotiated AI protections for their members during last year’s strikes, but the risks of the tech go beyond the scope of union contracts, said actors guild National Executive Director Duncan Crabtree-Ireland.

“We need public policy to catch up and to start putting these norms in place so that there is less of a Wild West kind of environment going on with AI,” Crabtree-Ireland said.

SAG-AFTRA has helped draft three federal bills related to deepfakes (misleading images and videos often involving celebrity likenesses), along with two measures in California, including AB 2602, that would strengthen worker control over use of their digital image. The legislation, if approved, would require that workers be represented by their union or legal counsel for agreements involving AI-generated likenesses to be legally binding.

Tech companies urge caution against overregulation. Todd O’Boyle, of the tech industry group Chamber of Progress, said California AI companies may opt to move elsewhere if government oversight becomes overbearing. It’s important for legislators to “not let fears of speculative harms drive policymaking when we’ve got this transformative, technological innovation that stands to create so much prosperity in its earliest days,” he said.

When regulations are put in place, it’s hard to roll them back, warned Aaron Levie, chief executive of the Redwood City, California-based cloud computing company Box, which is incorporating AI into its products.

“We need to actually have more powerful models that do even more and are more capable,” Levie said, “and then let’s start to assess the risk incrementally from there.”

But Crabtree-Ireland said tech companies are trying to slow-roll regulation by making the issues seem more complicated than they are and by saying they need to be solved in one comprehensive public policy proposal.

“We reject that completely,” Crabtree-Ireland said. “We don’t think everything about AI has to be solved all at once.”

2024 Los Angeles Times. Distributed by Tribune Content Agency, LLC.

Citation:
California lawmakers are trying to regulate AI before it’s too late (2024, June 24)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-california-lawmakers-ai-late.html







New soft robotic gripper designed with graphene and liquid crystals

Graphene plus liquid crystals equals 'Hot Fingers'
Credit: Laura van Hazendonk

Eindhoven researchers have developed a soft robotic “hand” made from liquid crystals and graphene that could be used to design future surgical robots. The new work has just been published in the journal ACS Applied Materials & Interfaces.

In our future hospitals, soft robots might be used as surgical robots. But before that can happen, researchers need to figure out how to precisely control and move these deformable robots. Added to that, many current soft robots contain metals, which means that their use in water-rich environments—like the human body—is rather limited.

TU/e researchers led by Ph.D. candidate Laura van Hazendonk, Zafeiris Khalil (as part of his master’s research), Michael Debije, and Heiner Friedrich have designed a soft robotic hand, or gripper, made from graphene and liquid crystals (both organic materials). This opens up the possibility of such a device being safely used in surgery in the future.

Robots have an enormous influence on our world. For instance, in industry, robots build automobiles and televisions. In hospitals, robots—such as the da Vinci robotic surgical system—assist surgeons and allow for minimally invasive operations. And some of us even have robots to do our vacuum cleaning at home.

“Society has become dependent on robots, and we’re coming up with new ways to use them,” says Van Hazendonk, Ph.D. researcher in the Department of Chemical Engineering and Chemistry. “But in devising new ways to use them, we need to think about using different types of materials to make them.”

Thinking soft

The different materials that Van Hazendonk is referring to are fluids, gels, and elastic materials—which are all easily deformable. “Typically, robots are made from metals, which are rigid and hard. But in certain applications, robots made from hard and rigid materials limit the performance of the robot,” says Van Hazendonk. “The solution is to think soft.”

In soft robotics, the goal is to make robots from materials like fluids or gels that can deform in certain situations and then can act like robots made from traditional rigid and hard materials.

One area where soft robots look set to have a major impact is in surgical procedures. Van Hazendonk adds, “For a surgeon, many operations can be complex and delicate, and therefore require precise dexterity on the part of the surgeon. Sometimes this just isn’t possible, and they turn to robots.

“But rigid robots may not be able to access some areas with ease either. That’s where soft robots can come to the fore, and our goal was to offer a potential new helping hand for use in clamping and suturing devices in surgery, for example.”

Turning to Nobel materials

For their research, Van Hazendonk and her colleagues opted to use a different type of deformable material—liquid crystals—along with graphene to make a soft gripper device or “hand” with four controllable and deformable “fingers.”

Intriguingly, both liquid crystals and graphene are directly or indirectly connected to Nobel Prizes in Physics over the last 30 years or so. Back in 1991, Pierre-Gilles de Gennes won the prize for his work on order in complex matter, such as liquid crystals. And in 2010, Andre Geim and Konstantin Novoselov won the prize for their work on graphene—the super-strong material that is also transparent and an effective conductor of electricity and heat.

“A liquid crystal behaves as a liquid or a solid depending on how it is excited or perturbed. When it flows, it acts like a liquid. But in special situations, the molecules in the liquid can arrange themselves to create a regular pattern or structure, such as a crystal you would see in a solid material under a powerful microscope,” explains Van Hazendonk. “The ability for liquid crystal materials to act like this is perfect when it comes to making soft robots.”

Graphene plus liquid crystals equals 'Hot Fingers'
Credit: ACS Applied Materials & Interfaces (2024). DOI: 10.1021/acsami.4c06130

Actuator challenge

With the materials selected, the researchers set out to design and make an actuator. “Actuators control and regulate motion in robotic systems. Usually, the actuator responds or moves when supplied with electricity, air, or a fluid,” says Van Hazendonk. “In our work, we turned to something else to drive liquid-crystal network (LCN) actuators.”

The researchers designed a gripper device with four “fingers” controlled using LCN actuators, which deform in response to heat from graphene-based heating elements, or tracks, embedded in the fingers of the gripper, or “hand.”

Bending of the fingers

“When electrical current passes through the black graphene tracks, the tracks heat up and then the heat from the tracks changes the molecular structure of the liquid crystal fingers and some of the molecules go from being ordered to disordered. This leads to bending of the fingers,” says Van Hazendonk. “Once the electrical current is switched off, the heat is lost, and the gripper returns to its initial state.”

One of the biggest challenges for the researchers related to the graphene heating elements as highlighted by Heiner Friedrich, assistant professor at the Department of Chemical Engineering and Chemistry.

“We needed to make sure that they would heat to the right temperature to change the liquid crystal layer, and we needed to make sure that this could be done at safe voltages. Initially, the graphene elements didn’t reach the right temperatures at safe voltages, or they would overheat and burn the device,” says Friedrich. “This and many other important problems were solved by Zafeiris Khalil during his MSc thesis.”

The researchers didn’t let this problem deter them, and in the end, they designed an actuator that can operate without any issues at voltages less than 15 volts. And in terms of performance, the grippers can lift small objects with a mass between 70 and 100 milligrams. “This might not sound like a lot, but in medical applications such as surgery, this can be useful for the exact and minuscule movement of tiny tools, implants, or biological tissue,” says Van Hazendonk.

For Van Hazendonk—who combines her Ph.D. research with being a member of the provincial parliament of Noord-Brabant (Provinciale Staten)—this research has been eye-opening for her.

She says, “I love how this work combines a useful and tangible application. The gripper device is based on fundamental technologies, but the actuator itself could form the basis for a suite of robots for use in biomedical or surgical applications in the future.”

And, in the future, Van Hazendonk and her colleagues have some interesting plans. She concludes, “We want to make a fully printed robot by figuring out a way to 3D-print the liquid-crystal layer. For our gripper, we made the layer by casting materials in a mold. Other researchers in the group of Michael Debije have shown that liquid crystals can be printed. For this gripper, we have printed the graphene layer, so it would be cool to have a fully printed device.”

More information:
Laura S. van Hazendonk et al, Hot Fingers: Individually Addressable Graphene-Heater Actuated Liquid Crystal Grippers, ACS Applied Materials & Interfaces (2024). DOI: 10.1021/acsami.4c06130

Citation:
New soft robotic gripper designed with graphene and liquid crystals (2024, June 18)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-soft-robotic-gripper-graphene-liquid.html







A symbolic model checking approach to verify quantum circuits

Towards error-free quantum computing: A symbolic model checking approach to verify quantum circuits
The proposed model-checking approach can be used for the specification and verification of quantum circuits with their desired properties. Credit: PeerJ Computer Science (2024). DOI: 10.7717/peerj-cs.2098

Quantum computing is a rapidly growing technology that utilizes the laws of quantum physics to solve complex computational problems that are extremely difficult for classical computing. Researchers worldwide have developed many quantum algorithms to take advantage of quantum computing, demonstrating significant improvements over classical algorithms.

Quantum circuits, which are models of quantum computation, are crucial for developing these algorithms. They are used to design and implement quantum algorithms before actual deployment on quantum hardware.

Quantum circuits comprise a sequence of quantum gates, measurements, and initializations of qubits, among other actions. Quantum gates perform quantum computations by operating on qubits, which are the quantum counterparts of classical bits (0s and 1s), and by manipulating the quantum states of the system. Quantum states are the output of quantum circuits; they can be measured to obtain classical outcomes with probabilities, from which further actions can be taken.
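The state/gate/measurement model described above can be made concrete in a few lines of NumPy. This is standard textbook quantum computing, not the authors' Maude specification: a qubit is a 2-vector of complex amplitudes, a gate is a unitary matrix, and measurement probabilities follow the Born rule:

```python
import numpy as np

# A qubit state is a length-2 complex vector; a gate is a 2x2 unitary.
ket0 = np.array([1, 0], dtype=complex)                       # |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = H @ ket0            # apply the gate: puts the qubit in superposition
probs = np.abs(state) ** 2  # Born rule: probability of each classical outcome
print(probs)                # equal chance of measuring 0 or 1
```

Measuring this state yields 0 or 1 with probability 1/2 each, which is exactly the kind of probabilistic classical outcome the article refers to.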

Since quantum computing is often counter-intuitive and dramatically different from classical computing, the probability of errors is much higher. Hence, it is necessary to verify that quantum circuits have the desired properties and function as intended. This can be done through model checking, a formal verification technique used to verify whether systems satisfy desired properties.

Although some model checkers are dedicated to quantum programs, there is a gap between model-checking quantum programs and model-checking quantum circuits, owing to their different representations and the absence of iteration in quantum circuits.

Addressing this gap, Assistant Professor Canh Minh Do and Professor Kazuhiro Ogata from Japan Advanced Institute of Science and Technology (JAIST) proposed a symbolic model checking approach.

Dr. Do explains, “Considering the success of model-checking methods for verification of classical circuits, model-checking of quantum circuits is a promising approach. We developed a symbolic approach for model checking of quantum circuits using laws of quantum mechanics and basic matrix operations using the Maude programming language.”

Their approach is detailed in a study published in the journal PeerJ Computer Science.

Maude is a high-level specification/programming language based on rewriting logic, which supports the formal specification and verification of complex systems. It is equipped with a Linear Temporal Logic (LTL) model checker, which checks whether systems satisfy the specified properties.

Additionally, Maude allows the creation of precise mathematical models of systems. The researchers formally specified quantum circuits in Maude, as a series of quantum gates and measurement applications, represented as basic matrix operations using laws of quantum mechanics with the Dirac notation. They specified the initial state and the desired properties of the system in LTL.

By using a set of quantum physics laws and basic matrix operations formalized in their specifications, quantum computation can be reasoned about in Maude. They then used the built-in Maude LTL model checker to automatically verify whether quantum circuits satisfy the desired properties.

They used this approach to check several early quantum communication protocols, including Superdense Coding, Quantum Teleportation, Quantum Secret Sharing, Entanglement Swapping, Quantum Gate Teleportation, Two Mirror-image Teleportation, and Quantum Network Coding, each with increasing complexity.
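To illustrate the flavor of such a check, here is a sketch in NumPy (rather than Maude) of verifying one of these protocols at the matrix level: superdense coding should deliver both classical bits correctly for every possible input. The gate names and ordering are the textbook protocol, not the authors' exact specification:

```python
import numpy as np

# Standard single-qubit gates and a CNOT with qubit 0 as control.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

def superdense(b1: int, b0: int) -> int:
    """Encode two classical bits on Alice's qubit, decode on Bob's side."""
    # Encoding: apply X if b0 is set, then Z if b1 is set, to qubit 0 only.
    alice = np.linalg.matrix_power(Z, b1) @ np.linalg.matrix_power(X, b0)
    state = np.kron(H, I2) @ CNOT @ np.kron(alice, I2) @ bell  # decode: CNOT, then H
    probs = np.abs(state) ** 2
    return int(np.argmax(probs))  # outcome is deterministic for this protocol

# Property to verify: the protocol recovers exactly the bits that were sent.
results = {(b1, b0): superdense(b1, b0) for b1 in (0, 1) for b0 in (0, 1)}
print(results)  # → {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}
```

Exhaustively checking the property over all inputs, as above, is what a model checker automates symbolically for circuits too large to enumerate by hand.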

They found that the original version of Quantum Gate Teleportation did not satisfy its desired property. Using their approach, the researchers proposed a revised version and confirmed its correctness.

These findings signify the importance of the proposed innovative approach for the verification of quantum circuits. However, the researchers also point out some limitations of their method, requiring further research.

Dr. Do says, “In the future, we aim to extend our symbolic reasoning to handle more quantum gates and more complicated reasoning on complex number operations. We also would like to apply our symbolic approach to model-checking quantum programs and quantum cryptography protocols.”

Verifying the intended operation of quantum circuits will be highly valuable in the upcoming era of quantum computing. In this context, the present approach marks the first step toward a general framework for the verification and specification of quantum circuits, paving the way for error-free quantum computing.

More information:
Canh Minh Do et al, Symbolic model checking quantum circuits in Maude, PeerJ Computer Science (2024). DOI: 10.7717/peerj-cs.2098

Citation:
Toward error-free quantum computing: A symbolic model checking approach to verify quantum circuits (2024, June 21)
retrieved 24 June 2024
from https://techxplore.com/news/2024-06-error-free-quantum-approach-circuits.html





